Can Small and Reasoning Large Language Models Score Journal Articles for Research Quality and Do Averaging and Few-shot Help?
Thelwall, Mike, Mohammadi, Ehsan
Assessing published academic journal articles is a common task for evaluations of departments and individuals. Whilst it is sometimes supported by citation data, Large Language Models (LLMs) may give more useful indications of article quality. Evidence of this capability exists for two of the largest LLM families, ChatGPT and Gemini, and the medium-sized LLM Gemma3 27b, but it is unclear whether smaller LLMs and reasoning models have similar abilities. This is important because larger models may be slow and impractical in some situations, and reasoning models may perform differently. Four relevant questions are addressed with Gemma3 variants, Llama4 Scout, Qwen3, Magistral Small and DeepSeek R1, on a dataset of 2,780 medical, health and life science papers in six fields, with two different gold standards, one of which is novel. The results suggest that smaller (open-weights) and reasoning LLMs have similar performance to ChatGPT 4o-mini and Gemini 2.0 Flash, but that 1b parameters may often, and 4b sometimes, be too few. Moreover, averaging scores from multiple identical queries seems to be a universally successful strategy, and few-shot prompts (four examples) tended to help, although the evidence was equivocal. Reasoning models did not have a clear advantage. Overall, the results show, for the first time, that smaller LLMs (above 4b parameters), including reasoning models, have a substantial capability to score journal articles for research quality, especially if score averaging is used.
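As a rough illustration of the score-averaging strategy described above, the sketch below sends the same quality-scoring prompt to an LLM several times and averages the parsed scores. The query_llm stub, the prompt wording, and the 1-4 scale are illustrative assumptions, not the paper's exact setup.

```python
import re
import statistics

def query_llm(prompt: str) -> str:
    """Placeholder for a call to a local, open-weights LLM served offline.
    Replace with your own client; here it returns a fixed reply so the
    sketch runs end to end."""
    return "Overall research quality score: 3"

def score_article(title: str, abstract: str, n_queries: int = 5) -> float:
    """Send the same quality-scoring prompt several times and average the
    parsed scores, mirroring the repeated-query averaging strategy."""
    prompt = (
        "You are an expert research assessor. Rate the following journal "
        "article for research quality on a 1-4 scale and reply with "
        "'Overall research quality score: <number>'.\n\n"
        f"Title: {title}\nAbstract: {abstract}"
    )
    scores = []
    for _ in range(n_queries):
        reply = query_llm(prompt)
        match = re.search(r"score:\s*([0-9.]+)", reply, flags=re.IGNORECASE)
        if match:
            scores.append(float(match.group(1)))
    return statistics.mean(scores) if scores else float("nan")

print(score_article("Example title", "Example abstract."))
```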
Can Smaller Large Language Models Evaluate Research Quality?
Research evaluation is a common and important task for academics and managers, and it is often supported by citation-based indicators (Hicks et al., 2015; Moed, 2005; Mukherjee, 2022). With the increasingly widespread use of Artificial Intelligence (AI) in research (Mohammadi et al., 2025), it is important to check whether it can save expert time by supporting the research evaluation task. ChatGPT research quality score estimates for journal articles are recent alternatives to citations as quantitative indicators to support evaluations (Kousha & Thelwall, 2025). Their value lies in their positive correlation with expert judgement in all or nearly all fields, at a slightly higher rate than for citation-based indicators (Thelwall, 2025abc). Despite some systematic biases or disparities (Thelwall & Kurt, 2025), this property means that they are helpful when expert judgement fails, such as for areas outside the assessor's expertise, as a cross-check for bias, and for evaluations where assessment expertise is unavailable or too expensive for the value of the task (Thelwall, 2025d). Whilst a positive correlation with expert judgement has been established for three of the largest Large Language Models (LLMs) in 2025, ChatGPT 4o, ChatGPT 4o-mini, and Google Gemini Flash 1.5 (Thelwall, 2025ac), these are all cloud-based services and may be too expensive or not private enough for some research evaluation purposes (Nowak et al., 2025). Moreover, cloud-based services can be withdrawn, updated, or made more costly, so research evaluation procedures may not be able to rely on them. Thus, there is a need to test whether any smaller "open weights" LLMs (Sowe et al., 2024) that can be downloaded and used offline have the capability to estimate research quality.
Evaluating the quality of published medical research with ChatGPT
Thelwall, Mike, Jiang, Xiaorui, Bath, Peter A.
Research quality evaluation is important for departmental evaluations and academic career decisions. Unfortunately, the evaluators may not have time to fully read the work assessed and may instead rely on the reputation or Journal Impact Factor of the publishing journals, on the citation counts for individual articles, or on the reputation or career citations of the author. Whilst journal-based evidence is not optimal (Waltman & Traag, 2021), the main article-level indicator, citation counts, only directly reflects the scholarly impact of work and not its rigour, originality, and societal impacts (Aksnes, et al., 2019), all of which are relevant quality dimensions (Langfeldt et al., 2020). Moreover, article citation counts are ineffective for newer articles (Wang, 2013). In response, attempts to use Large Language Models (LLMs) to evaluate the quality of academic work have shown that ChatGPT quality scores are at least as effective as citation counts in most fields and substantially better in a few (Thelwall & Yaghi, 2024). Medicine is an exception, however, with ChatGPT research quality scores having a small negative correlation with the mean scores of the submitting department in the Research Excellence Framework (REF) Clinical Medicine Unit of Assessment (UoA) (Thelwall, 2024ab; Thelwall & Yaghi, 2024).
Evaluating Research Quality with Large Language Models: An Analysis of ChatGPT's Effectiveness with Different Settings and Inputs
Evaluating the quality of academic journal articles is a time-consuming but critical task for national research evaluation exercises, appointments and promotion. It is therefore important to investigate whether Large Language Models (LLMs) can play a role in this process. This article assesses which ChatGPT inputs (full text without tables, figures and references; title and abstract; title only) produce better quality score estimates, and the extent to which scores are affected by ChatGPT models and system prompts. The results show that the optimal input is the article title and abstract, with average ChatGPT scores based on these (30 iterations on a dataset of 51 papers) correlating at 0.67 with human scores, the highest ever reported. ChatGPT 4o is slightly better than 3.5-turbo (0.66) and 4o-mini (0.66). The results suggest that article full texts might confuse LLM research quality evaluations, even though complex system instructions for the task are more effective than simple ones. Thus, whilst abstracts contain insufficient information for a thorough assessment of rigour, they may contain strong pointers about originality and significance. Finally, linear regression can be used to convert the model scores into the human scale scores, which is 31% more accurate than guessing.
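The final step mentioned in the abstract, converting averaged model scores to the human scale with linear regression, can be sketched as follows. The numbers are fabricated for illustration; only the procedure (fit an ordinary least squares line, then apply it to new averaged scores) reflects the abstract.

```python
import numpy as np

# Illustrative (fabricated) data: mean ChatGPT scores over repeated queries
# and the corresponding human quality scores for a handful of articles.
llm_means = np.array([2.8, 3.1, 3.6, 2.5, 3.9, 3.3])
human_scores = np.array([2.0, 3.0, 4.0, 2.0, 4.0, 3.0])

# Fit human_score ~ a * llm_mean + b by ordinary least squares.
a, b = np.polyfit(llm_means, human_scores, deg=1)

# Convert a new averaged LLM score onto the human scale.
new_llm_mean = 3.4
predicted_human = a * new_llm_mean + b
print(f"Predicted human-scale score: {predicted_human:.2f}")

# Correlation between averaged LLM scores and human scores
# (the abstract reports 0.67 on the real data).
r = np.corrcoef(llm_means, human_scores)[0, 1]
print(f"Correlation: {r:.2f}")
```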
Can ChatGPT evaluate research quality?
Purpose: Assess whether ChatGPT 4.0 is accurate enough to perform research evaluations on journal articles to automate this time-consuming task. Design/methodology/approach: Test the extent to which ChatGPT-4 can assess the quality of journal articles using a case study of the published scoring guidelines of the UK Research Excellence Framework (REF) 2021 to create a research evaluation ChatGPT. This was applied to 51 of my own articles and compared against my own quality judgements. Findings: ChatGPT-4 can produce plausible document summaries and quality evaluation rationales that match the REF criteria. Its overall scores have weak correlations with my self-evaluation scores of the same documents (averaging r=0.281 over 15 iterations, with 8 being statistically significantly different from 0). In contrast, the average scores from the 15 iterations produced a statistically significant positive correlation of 0.509. Thus, averaging scores from multiple ChatGPT-4 rounds seems more effective than individual scores. The positive correlation may be due to ChatGPT being able to extract the author's significance, rigour, and originality claims from inside each paper. If my weakest articles are removed, then the correlation with average scores (r=0.200) falls below statistical significance, suggesting that ChatGPT struggles to make fine-grained evaluations. Research limitations: The data is self-evaluations of a convenience sample of articles from one academic in one field. Practical implications: Overall, ChatGPT does not yet seem to be accurate enough to be trusted for any formal or informal research quality evaluation tasks. Research evaluators, including journal editors, should therefore take steps to control its use. Originality/value: This is the first published attempt at post-publication expert review accuracy testing for ChatGPT.
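The abstract contrasts two statistics: the average of the correlations computed separately for each ChatGPT-4 round, and the single correlation obtained after first averaging scores across rounds. The sketch below, using fabricated data, shows how each is computed and why the second tends to be larger (averaging suppresses per-round noise).

```python
import numpy as np

rng = np.random.default_rng(0)

n_articles, n_iterations = 51, 15

# Fabricated example data: author self-evaluation scores and noisy ChatGPT
# scores for each article in each round (the real study used 51 articles
# and 15 ChatGPT-4 rounds).
human = rng.integers(1, 5, size=n_articles).astype(float)
chatgpt = human[:, None] + rng.normal(0, 1.5, size=(n_articles, n_iterations))

# Statistic 1: correlate each round with the human scores, then average.
per_round_r = [
    np.corrcoef(human, chatgpt[:, i])[0, 1] for i in range(n_iterations)
]
print(f"Mean per-round correlation: {np.mean(per_round_r):.3f}")

# Statistic 2: average the scores across rounds first, then correlate.
averaged_scores = chatgpt.mean(axis=1)
print(f"Correlation of averaged scores: "
      f"{np.corrcoef(human, averaged_scores)[0, 1]:.3f}")
```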
Has China caught up to the US in AI research? An exploration of mimetic isomorphism as a model for late industrializers
Min, Chao, Zhao, Yi, Bu, Yi, Ding, Ying, Wagner, Caroline S.
Artificial Intelligence (AI), a cornerstone of 21st-century technology, has seen remarkable growth in China. In this paper, we examine China's AI development process, demonstrating that it is characterized by rapid learning and differentiation, surpassing the export-oriented growth propelled by Foreign Direct Investment seen in earlier Asian industrializers. Our data indicates that China currently leads the USA in the volume of AI-related research papers. However, when we delve into the quality of these papers based on specific metrics, the USA retains a slight edge. Nevertheless, the pace and scale of China's AI development remain noteworthy. We attribute China's accelerated AI progress to several factors, including global trends favoring open access to algorithms and research papers, contributions from China's broad diaspora and returnees, and relatively lax data protection policies. As part of our research, we have developed a novel measure for gauging China's imitation of US research. Our analysis shows that by 2018, the time lag between China and the USA in addressing AI research topics had evaporated. This finding suggests that China has effectively bridged a significant knowledge gap and could potentially be setting out on an independent research trajectory. While this study compares China and the USA exclusively, it's important to note that research collaborations between these two nations have resulted in more highly cited work than those produced by either country independently. This underscores the power of international cooperation in driving scientific progress in AI.
AI system not yet ready to help peer reviewers assess research quality
Researchers tasked with examining whether artificial intelligence (AI) technology could assist in the peer review of journal articles submitted to the United Kingdom's Research Excellence Framework (REF) say the system is not yet accurate enough to aid human assessment, and recommend further testing in a large-scale pilot scheme. The team's findings, published on 12 December, show that the AI system generated identical scores to human peer reviewers up to 72% of the time. When averaged out over the multiple submissions made by some institutions across a broad range of the 34 subject-based 'units of assessment' that make up the REF, "the correlation between the human score and the AI score was very high", says data scientist Mike Thelwall at the University of Wolverhampton, UK, who is a co-author of the report. In its current form, however, the tool is most useful when assessing research output from institutions that submit a lot of articles to the REF, Thelwall says. It is less useful for smaller universities that submit only a handful of articles.
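For illustration only, the sketch below computes the two kinds of figures mentioned in this piece: an article-level exact-agreement rate between human and AI scores, and a correlation between institution-averaged human and AI scores. All data are fabricated and the aggregation details are assumptions, not the report's actual method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fabricated per-article data: institution id, human score, AI score.
n_articles = 500
institutions = rng.integers(0, 40, size=n_articles)
human = rng.integers(1, 5, size=n_articles)
ai = np.where(rng.random(n_articles) < 0.7, human,
              rng.integers(1, 5, size=n_articles))

# Article-level exact agreement (the "identical scores" figure).
agreement = np.mean(human == ai)
print(f"Exact agreement rate: {agreement:.2%}")

# Institution-level correlation: average scores per institution, then correlate.
inst_ids = np.unique(institutions)
human_means = np.array([human[institutions == i].mean() for i in inst_ids])
ai_means = np.array([ai[institutions == i].mean() for i in inst_ids])
print(f"Institution-level correlation: "
      f"{np.corrcoef(human_means, ai_means)[0, 1]:.3f}")
```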
Should AI have a role in assessing research quality?
Efforts to ease the workloads of peer reviewers by using artificial intelligence (AI) are gathering pace, with one country's main research-evaluation exercise actively looking into ways of harnessing the technology. A study commissioned by the United Kingdom's main public research-funding bodies is examining how algorithms can assist in conducting peer review on journal articles submitted to the UK's Research Excellence Framework (REF). The REF, a national quality audit that measures the impact of research carried out at UK higher-education institutions, is a huge undertaking. In the latest iteration, the results of which were published in May 2022, more than 185,000 research outputs were evaluated from more than 76,000 academics based at 157 UK institutions.